12,037 research outputs found

    On the Compression of Translation Operator Tensors in FMM-FFT-Accelerated SIE Simulators via Tensor Decompositions

    Tensor decomposition methodologies are proposed to reduce the memory requirement of the translation operator tensors arising in fast multipole method-fast Fourier transform (FMM-FFT)-accelerated surface integral equation (SIE) simulators. These methodologies leverage Tucker, hierarchical Tucker (H-Tucker), and tensor train (TT) decompositions to compress the FFT'ed translation operator tensors stored in three-dimensional (3D) and four-dimensional (4D) array formats. Extensive numerical tests are performed to demonstrate the memory savings achieved and the computational overhead introduced by these methodologies for different simulation parameters. Numerical results show that the H-Tucker-based methodology for the 4D array format yields the maximum memory saving, while the Tucker-based methodology for the 3D array format introduces the minimum computational overhead. For many practical scenarios, all methodologies yield a significant reduction in the memory requirement of the translation operator tensors while imposing negligible or acceptable computational overhead.
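
    As a rough illustration of the kind of compression involved, the sketch below applies a truncated Tucker (HOSVD) decomposition to a small, smooth 3D array in Python; the kernel, sizes, and truncation ranks are illustrative assumptions, not the paper's setup or results.

```python
# Illustrative Tucker (HOSVD) compression of a smooth 3D tensor.
# The kernel, sizes, and ranks below are assumptions for demonstration only.
import numpy as np

def hosvd_truncate(T, ranks):
    """Truncated higher-order SVD: returns core tensor and factor matrices."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = T.copy()
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    out = core
    for mode, U in enumerate(factors):
        out = np.moveaxis(np.tensordot(U, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out

n = 60
i, j, k = np.meshgrid(*(np.arange(n),) * 3, indexing="ij")
T = 1.0 / (1.0 + i + j + k)          # smooth, rapidly compressible kernel

core, factors = hosvd_truncate(T, ranks=(10, 10, 10))
stored = core.size + sum(U.size for U in factors)
rel_err = np.linalg.norm(T - tucker_reconstruct(core, factors)) / np.linalg.norm(T)
print(f"compression ratio: {T.size / stored:.1f}x, relative error: {rel_err:.2e}")
```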

    Twentieth-century Trends in the Annual Cycle of Temperature across the Northern Hemisphere

    The annual cycle of surface air temperature is examined across Northern Hemisphere land areas (north of 25°N) by comparing the results from CRUTS against four reanalysis datasets: two versions of the Twentieth Century Reanalysis (20CR and 20CRC) and two versions of the ERA-CLIM reanalyses (ERA-20C and ERA-20CM). The Modulated Annual Cycle is adaptively derived using an Ensemble Empirical Mode Decomposition (EEMD) filter and is used to define the phase and amplitude of the annual cycle; the EEMD method does not impose a simple sinusoidal shape on the annual cycle. None of the reanalyses assimilates surface temperature data, but they differ in the observations that are included: both ERA-20C and 20CR assimilate surface pressure data; ERA-20C also includes surface wind data over the oceans; ERA-20CM does not assimilate any of these synoptic data; and none of the reanalyses assimilates land-use data. It is demonstrated that synoptic variability is critical for explaining the trends and variability of the annual cycle of surface temperature across the Northern Hemisphere. The CMIP5 forcings alone are insufficient to explain the observed trends and decadal-scale variability, particularly with respect to the decline in the amplitude of the annual cycle throughout the twentieth century. The variability in the annual cycle during the latter half of the twentieth century was unusual in the context of the century as a whole, and was most likely related to large-scale atmospheric variability, although uncertainty in the results is greatest before ca. 1930.
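
    A minimal sketch of how such an adaptive annual cycle might be extracted in practice is given below, assuming the third-party PyEMD package and a synthetic monthly series; the data, parameters, and IMF-selection rule are illustrative assumptions, not the paper's procedure.

```python
# Sketch: derive a modulated annual cycle from a monthly series with EEMD,
# then read off its amplitude via the analytic signal. Synthetic data;
# assumes PyEMD (pip install EMD-signal) and SciPy.
import numpy as np
from scipy.signal import hilbert
from PyEMD import EEMD

rng = np.random.default_rng(0)
n_years = 80
t = np.arange(12 * n_years) / 12.0                       # time in years
# annual cycle whose amplitude slowly declines, plus noise
signal = (12.0 - 0.02 * t) * np.cos(2 * np.pi * t) + rng.normal(0, 1.5, t.size)

imfs = EEMD(trials=100).eemd(signal, t)

def mean_period_years(imf):
    zero_crossings = np.count_nonzero(np.diff(np.sign(imf)) != 0)
    return 2.0 * imf.size / max(zero_crossings, 1) / 12.0

# take the IMF whose mean period is closest to one year as the annual cycle
annual = min(imfs, key=lambda imf: abs(mean_period_years(imf) - 1.0))
amplitude = np.abs(hilbert(annual))
print("mean amplitude, first vs last decade:",
      amplitude[:120].mean().round(2), amplitude[-120:].mean().round(2))
```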

    An Army of Me: Sockpuppets in Online Discussion Communities

    In online discussion communities, users can interact and share information and opinions on a wide variety of topics. However, some users may create multiple identities, or sockpuppets, and engage in undesired behavior by deceiving others or manipulating discussions. In this work, we study sockpuppetry across nine discussion communities, and show that sockpuppets differ from ordinary users in terms of their posting behavior, linguistic traits, and social network structure. Sockpuppets tend to start fewer discussions, write shorter posts, use more personal pronouns such as "I", and have more clustered ego-networks. Further, pairs of sockpuppets controlled by the same individual are more likely to interact in the same discussion at the same time than pairs of ordinary users. Our analysis suggests a taxonomy of deceptive behavior in discussion communities. Pairs of sockpuppets can vary in their deceptiveness, i.e., whether they pretend to be different users, and in their supportiveness, i.e., whether they support the arguments of other sockpuppets controlled by the same user. We apply these findings to a series of prediction tasks, notably, to identify whether a pair of accounts belongs to the same underlying user or not. Altogether, this work presents a data-driven view of deception in online discussion communities and paves the way towards the automatic detection of sockpuppets.
    Comment: 26th International World Wide Web Conference 2017 (WWW 2017)
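
    To make the prediction setup concrete, here is a minimal sketch of a pair-level classifier of the kind the abstract describes; the feature names, synthetic data, and model choice are illustrative assumptions, not the paper's dataset, feature set, or method.

```python
# Sketch: classify whether a pair of accounts belongs to the same user,
# using illustrative pair features motivated by the abstract (difference in
# mean post length, first-person pronoun rate, co-posting rate, clustering).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_pairs = 2000
X = rng.normal(size=(n_pairs, 4))        # synthetic pair features
# synthetic labels correlated with the "co-posting" and "pronoun" features
logits = 1.5 * X[:, 2] + 1.0 * X[:, 1] - 0.5 * X[:, 0]
y = (rng.random(n_pairs) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```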

    New methodologies to characterize the effectiveness of the gene transfer mediated by DNA-chitosan nanoparticles

    In this work, three DNA-chitosan nanoparticle formulations (Np), differing in the molecular weight (MW; 150 kDa, 400 kDa, and 600 kDa) of the polysaccharide, were prepared and administered by two different routes: the hydrodynamics-based procedure and intraduodenal injection. After the hydrodynamic injection, DNA-chitosan nanoparticles predominantly accumulated in the liver, where the transgene was expressed for at least 105 days. No significant influence of MW was observed on the levels of luciferase expression. The curves of bioluminescence versus time obtained using the charge-coupled device (CCD) camera were described and divided into three phases: (i) the initial phase, (ii) the sustained release step, and (iii) the decline phase (promoter inactivation, immunological and physiological processes). From these curves, which describe the transgene expression profile, the behavior of the different formulations as gene delivery systems was characterized. The following parameters were calculated: Cmax (maximum level of detected bioluminescence), AUC (area under the bioluminescence-time curve), and MET (mean time of the transgene expression). This approach offers the possibility of studying and comparing transgene expression kinetics among a wide variety of gene delivery systems. Finally, the intraduodenal administration of naked DNA permitted gene transfer in a dose-dependent manner, quantifiable with the CCD camera within 3 days. Nevertheless, the same administration procedure with the three formulations did not improve the levels of transgene expression obtained with naked DNA. This could be explained by the rapid physiological turnover of enterocytes and by the ability of chitosan nanoparticles to control the DNA release.
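
    A minimal sketch of the expression-kinetics summary described above follows: Cmax, the AUC of the bioluminescence-time curve, and a mean expression time. The sample curve is synthetic, and computing MET as a signal-weighted mean time is an assumption about the paper's exact definition.

```python
# Sketch: summarize a bioluminescence-time curve with Cmax, AUC, and MET.
# The measurement days and signal values below are synthetic placeholders.
import numpy as np

days = np.array([1, 3, 7, 14, 28, 56, 105], dtype=float)
signal = np.array([5e6, 2e7, 1.5e7, 8e6, 4e6, 1e6, 2e5])   # photons/s, illustrative

c_max = signal.max()
auc = np.trapz(signal, days)                # area under the curve (trapezoid rule)
met = np.trapz(days * signal, days) / auc   # signal-weighted mean expression time

print(f"Cmax = {c_max:.2e}, AUC = {auc:.2e} (signal*day), MET = {met:.1f} days")
```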

    Restudy on Dark Matter Time-Evolution in the Littlest Higgs model with T-parity

    Following a previous study, the heavy photon in the Littlest Higgs model (LHM) with T-parity is considered as a possible dark matter candidate, and its relic abundance is estimated in terms of the Boltzmann-Lee-Weinberg time-evolution equation. The effects of T-parity violation are also considered. Our calculations show that when the Higgs mass $M_H$ is taken to be 300 GeV and T-parity violation is not considered, only two narrow ranges, $133 < M_{A_H} < 135$ GeV and $167 < M_{A_H} < 169$ GeV, are consistent with current astrophysical observations, and if $135 < M_{A_H} < 167$ GeV, there must exist at least one other species of heavy particle contributing to the cold dark matter. If T-parity can be violated, the heavy photon can decay into ordinary Standard Model particles, which would affect the dark matter abundance in the universe; we discuss the constraint on the T-parity violation parameter based on the present data. Direct detection prospects are also discussed in some detail.
    Comment: 13 pages, 11 figures included
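
    For orientation, a minimal numerical sketch of the standard freeze-out (Boltzmann-Lee-Weinberg) evolution in its usual dimensionless form is given below; the mass, cross section, and degrees of freedom are illustrative placeholders, not the values or model details used in the paper.

```python
# Minimal sketch of the standard freeze-out (Boltzmann / Lee-Weinberg) relic
# evolution in dimensionless form:
#   dY/dx = -(lambda / x**2) * (Y**2 - Y_eq(x)**2),   x = m/T.
# Mass, cross section, and degrees of freedom are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

m = 150.0          # candidate mass in GeV (illustrative, not the paper's value)
sigma_v = 3e-9     # thermally averaged <sigma v> in GeV^-2 (illustrative)
g_star = 90.0      # effective relativistic degrees of freedom near freeze-out
M_pl = 1.22e19     # Planck mass in GeV
g_dof = 3.0        # spin states of a massive vector ("heavy photon")

lam = np.sqrt(np.pi / 45.0) * np.sqrt(g_star) * M_pl * m * sigma_v

def Y_eq(x):
    # non-relativistic equilibrium yield (Maxwell-Boltzmann approximation)
    return 0.145 * (g_dof / g_star) * x**1.5 * np.exp(-x)

def rhs(x, Y):
    return [-(lam / x**2) * (Y[0]**2 - Y_eq(x)**2)]

# start well before freeze-out (x_f ~ 20-25), where Y still tracks equilibrium
sol = solve_ivp(rhs, (10.0, 1000.0), [Y_eq(10.0)],
                method="Radau", rtol=1e-8, atol=1e-30)
Y_inf = sol.y[0, -1]
print("Omega h^2 ~", 2.74e8 * m * Y_inf)   # rough conversion to relic density
```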

    Antitumor effect of allogenic fibroblasts engineered to express Fas ligand (FasL)

    Fas ligand is a type II transmembrane protein that can induce apoptosis in Fas-expressing cells. Recent reports indicate that expression of FasL in transplanted cells may cause graft rejection and, on the other hand, that tumor cells may lose their tumorigenicity when they are engineered to express FasL. These effects could be related to the recruitment of neutrophils by FasL and the activation of their cytotoxic machinery. In this study we investigated the antitumor effect of allogenic fibroblasts engineered to express FasL. Fibroblasts engineered to express FasL (PA317/FasL) did not exert toxic effects on a transformed liver cell line (BNL) or a colon cancer cell line (CT26) in vitro, but they could abrogate their tumorigenicity in vivo. Histological examination of the site of implantation of BNL cells mixed with PA317/FasL revealed massive infiltration of polymorphonuclear neutrophils and mononuclear cells. A specific immune protective effect was observed in animals primed with a mixture of BNL or CT26 and PA317/FasL cells. Rechallenge with tumor cells 14 or 100 days after priming resulted in protection of 100% or 50% of animals, respectively. This protective effect was due to CD8+ cells, since depletion of CD8+ cells led to tumor formation. In addition, treatment of pre-established BNL tumors with a subcutaneous injection of a BNL and PA317/FasL cell mixture at a distant site caused significant inhibition of tumor growth. These data demonstrate that allogenic cells engineered with FasL are able to abolish tumor growth and induce specific protective immunity when they are mixed with neoplastic cells.

    SWIFT: Scalable Wasserstein Factorization for Sparse Nonnegative Tensors

    Existing tensor factorization methods assume that the input tensor follows some specific distribution (e.g., Poisson, Bernoulli, or Gaussian), and solve the factorization by minimizing an empirical loss function defined based on the corresponding distribution. However, this approach suffers from several drawbacks: 1) In reality, the underlying distributions are complicated and unknown, making them infeasible to approximate with a simple distribution. 2) The correlation across dimensions of the input tensor is not well utilized, leading to sub-optimal performance. Although heuristics have been proposed to incorporate such correlation as side information under a Gaussian distribution, they cannot easily be generalized to other distributions. Thus, a more principled way of utilizing the correlation in tensor factorization models is still an open challenge. Without assuming any explicit distribution, we formulate tensor factorization as an optimal transport problem with the Wasserstein distance, which can handle non-negative inputs. We introduce SWIFT, which minimizes the Wasserstein distance between the input tensor and its reconstruction. In particular, we define the N-th order tensor Wasserstein loss for the widely used tensor CP factorization and derive the optimization algorithm that minimizes it. By leveraging sparsity structure and different equivalent formulations for optimizing computational efficiency, SWIFT is as scalable as other well-known CP algorithms. Using the factor matrices as features, SWIFT achieves up to 9.65% and 11.31% relative improvement over baselines for downstream prediction tasks. Under noisy conditions, SWIFT achieves up to 15% and 17% relative improvements over the best competitors for the prediction tasks.
    Comment: Accepted by AAAI-2
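
    As a building block for such a Wasserstein loss, the sketch below computes an entropically regularized (Sinkhorn) optimal transport distance between two nonnegative histograms; the ground cost, regularization strength, and inputs are illustrative assumptions and this is not SWIFT's full factorization algorithm.

```python
# Sketch: entropic (Sinkhorn) optimal transport distance between two
# nonnegative histograms, the kind of loss a Wasserstein factorization
# minimizes between an input tensor slice and its reconstruction.
import numpy as np

def sinkhorn_distance(p, q, cost, eps=0.1, n_iter=200):
    """Entropic OT distance between histograms p and q (each sums to 1)."""
    K = np.exp(-cost / eps)                  # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)
        u = p / (K @ v)
    transport = u[:, None] * K * v[None, :]  # approximate transport plan
    return np.sum(transport * cost)

n = 50
grid = np.linspace(0, 1, n)
cost = (grid[:, None] - grid[None, :]) ** 2  # squared ground cost on a 1D grid
p = np.exp(-((grid - 0.3) ** 2) / 0.01); p /= p.sum()
q = np.exp(-((grid - 0.7) ** 2) / 0.01); q /= q.sum()
print("Sinkhorn distance:", sinkhorn_distance(p, q, cost))
```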